Learning sets of filters using back-propagation
Authors
Abstract
A learning procedure, called back-propagation, for layered networks of deterministic, neuron-like units has been described previously. The ability of the procedure automatically to discover useful internal representations makes it a powerful tool for attacking difficult problems like speech recognition. This paper describes further research on the learning procedure and presents an example in w...
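As a rough sketch of the kind of procedure described here, the snippet below trains a small layered network of logistic units with back-propagation on the XOR problem. The task, layer sizes, learning rate and number of epochs are illustrative assumptions, not the paper's actual experiments (which concern learning sets of filters).

```python
# Minimal back-propagation sketch for a layered network of logistic units.
# Illustrative only: the XOR task, layer sizes and learning rate are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of logistic units.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error derivative layer by layer.
    dy = (y - T) * y * (1 - y)          # dE/dnet at the output units
    dh = (dy @ W2.T) * h * (1 - h)      # dE/dnet at the hidden units

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)

print(np.round(y, 2))  # outputs approach the XOR targets
```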
Similar resources

Using PHiPAC to speed error back-propagation learning
Signal processing algorithms such as neural network learning, convolution, cross-correlation, IIR filtering, etc., can be computationally time-consuming and are often used in time-critical applications. This makes it desirable to achieve high efficiency on these routines. Such algorithms are often coded in assembly language to achieve optimal speed, but it is then difficult to make a full exploration ...
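PHiPAC is a methodology for generating highly tuned C matrix-multiply kernels, and batched error back-propagation spends most of its time in exactly such matrix products. The snippet below is an assumed illustration of that point, using NumPy's BLAS-backed `@` operator as a stand-in for a PHiPAC-generated routine; the batch size and layer widths are arbitrary.

```python
# The expensive inner computation of batched back-propagation is a matrix product.
# NumPy's @ (BLAS) stands in here for a tuned kernel such as PHiPAC-generated C code;
# the sizes below are arbitrary assumptions.
import time
import numpy as np

rng = np.random.default_rng(1)
B, n_in, n_out = 256, 512, 256          # batch size and layer widths (assumed)
X = rng.standard_normal((B, n_in))      # layer inputs for a batch
D = rng.standard_normal((B, n_out))     # back-propagated error derivatives

# Per-pattern accumulation: what a straightforward, untuned loop looks like.
t0 = time.perf_counter()
G_loop = np.zeros((n_in, n_out))
for b in range(B):
    G_loop += np.outer(X[b], D[b])
t_loop = time.perf_counter() - t0

# The same weight gradient as a single matrix-matrix product (one GEMM call).
t0 = time.perf_counter()
G_gemm = X.T @ D
t_gemm = time.perf_counter() - t0

assert np.allclose(G_loop, G_gemm)
print(f"loop: {t_loop:.4f}s  gemm: {t_gemm:.4f}s")
```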
The relationship between using language learning strategies, learners’ optimism, educational status, duration of learning and demotivation
With the growth of more humanistic approaches towards teaching foreign languages, more emphasis has been put on learners’ feelings, emotions and individual differences. One of the issues in teaching and learning English as a foreign language is demotivation. The purpose of this study was to investigate the relationship between the components of language learning strategies, optimism, duration o...
Scaling Relationships in Back-propagation Learning
We present an empirical study of the required training time for neural networks to learn to compute the parity function using the back-propagation learning algorithm, as a function of the number of inputs. The parity function is a Boolean predicate whose order is equal to the number of inputs. We find that the training time behaves roughly as 4^n where n is the num...
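For concreteness, here is the standard n-input parity task the study measures, together with the 4^n growth rate quoted in the abstract. The sketch only builds the dataset and prints that quoted figure; it does not reproduce the paper's timing experiments.

```python
# The n-input parity task: all 2^n Boolean patterns, each labelled with the
# parity (XOR) of its bits. Illustrative sketch only.
import itertools
import numpy as np

def parity_dataset(n):
    X = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    T = (X.sum(axis=1) % 2).reshape(-1, 1)   # 1 if an odd number of bits are on
    return X, T

for n in range(2, 7):
    X, T = parity_dataset(n)
    # The abstract's empirical finding: training time grows roughly as 4**n.
    print(f"n={n}: {len(X)} patterns ({int(T.sum())} odd-parity); training time ~ 4^n = {4**n}")
```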
Experiments on Learning by Back Propagation
Rumelhart, Hinton and Williams [Rumelhart et al. 86] describe a learning procedure for layered networks of deterministic, neuron-like units. This paper describes further research on the learning procedure. We start by describing the units, the way they are connected, the learning procedure, and the extension to iterative nets. We then give an example in which a network learns a set of filters t...
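The "extension to iterative nets" mentioned here is usually presented as unfolding a recurrent network in time, so that the same weights are reused at every step and ordinary back-propagation applies to the resulting layered net. The sketch below shows just such an unrolled forward pass; the logistic units match the description above, but the sizes and number of time steps are assumptions.

```python
# Forward pass of an iterative (recurrent) net unrolled for a few time steps:
# the same weight matrices are reused at every step, so back-propagation can
# treat it as a deep layered net with shared weights. Sizes are assumptions.
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_state, n_steps = 3, 5, 4
W_in = rng.normal(scale=0.5, size=(n_in, n_state))      # input -> state weights
W_rec = rng.normal(scale=0.5, size=(n_state, n_state))  # state -> state weights (shared)

x = rng.standard_normal((n_steps, n_in))   # one input vector per time step
state = np.zeros(n_state)
states = []
for t in range(n_steps):
    # Same W_in and W_rec at every step: the unrolled net shares weights across layers.
    state = sigmoid(x[t] @ W_in + state @ W_rec)
    states.append(state)

print(np.round(np.array(states), 3))
```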
Journal
Journal title: Computer Speech & Language
Year: 1987
ISSN: 0885-2308
DOI: 10.1016/0885-2308(87)90026-x